Results 1 - 20 of 226
1.
Comput Math Methods Med ; 2023: 7091301, 2023.
Article in English | MEDLINE | ID: covidwho-20243039

ABSTRACT

Medical imaging refers to the process of obtaining images of internal organs for therapeutic purposes such as discovering or studying diseases. The primary objective of medical image analysis is to improve the efficacy of clinical research and treatment options. Deep learning has revamped medical image analysis, yielding excellent results in image processing tasks such as registration, segmentation, feature extraction, and classification. The prime motivations for this are the availability of computational resources and the resurgence of deep convolutional neural networks. Deep learning techniques are good at observing hidden patterns in images and support clinicians in reaching accurate diagnoses. Deep learning has proven to be the most effective method for organ segmentation, cancer detection, disease categorization, and computer-assisted diagnosis. Many deep learning approaches have been published to analyze medical images for various diagnostic purposes. In this paper, we review work exploiting current state-of-the-art deep learning approaches in medical image processing. We begin the survey with a synopsis of research works in medical imaging based on convolutional neural networks. Second, we discuss popular pretrained models and generative adversarial networks that help improve convolutional networks' performance. Finally, to ease direct evaluation, we compile the performance metrics of deep learning models focusing on COVID-19 detection and child bone age prediction.


Subject(s)
COVID-19 , Deep Learning , Child , Humans , Diagnostic Imaging/methods , Neural Networks, Computer , Image Processing, Computer-Assisted/methods
2.
Biomed Res Int ; 2023: 1632992, 2023.
Article in English | MEDLINE | ID: covidwho-2323857

ABSTRACT

Artificial intelligence (AI) researchers and medical practitioners have reported AI systems that accurately detect COVID-19 in chest images. However, the robustness of these models remains unclear for the segmentation of images with a nonuniform density distribution or multiphase targets. The most representative example is the Chan-Vese (CV) image segmentation model. In this paper, we demonstrate that the recent level set (LV) model performs well at detecting target characteristics in medical imaging, relying on a filtering variational method based on the global medical pathology feature. We observe that the filtering variational method obtains image features of better quality than other LV models. This research reveals a far-reaching problem in medical-imaging AI knowledge detection. In addition, analysis of the experimental results shows that the proposed algorithm is effective at detecting the lung region features of COVID-19 images and adapts well to processing different images. These findings suggest that the proposed LV method can serve as an effective clinical adjunct to machine-learning healthcare models.


Subject(s)
Artificial Intelligence , COVID-19 , Humans , COVID-19/diagnostic imaging , Diagnostic Imaging , Algorithms , Models, Theoretical , Image Processing, Computer-Assisted/methods
3.
PLoS One ; 18(5): e0285211, 2023.
Article in English | MEDLINE | ID: covidwho-2320346

ABSTRACT

Aerial photography is a long-range, non-contact target detection technology that enables qualitative or quantitative analysis of the target. However, aerial images generally exhibit some chromatic aberration and color distortion. Effective segmentation of aerial images can therefore enhance feature information and reduce the computational difficulty of subsequent image processing. In this paper, we propose an improved version of Golden Jackal Optimization, dubbed Helper Mechanism Based Golden Jackal Optimization (HGJO), and apply it to multilevel threshold segmentation of aerial images. The proposed method uses opposition-based learning to boost population diversity, and a new approach to calculating the prey escape energy is proposed to improve the convergence speed of the algorithm. In addition, the Cauchy distribution is introduced to adjust the original update scheme and enhance the exploration capability of the algorithm. Finally, a novel "helper mechanism" is designed to improve the algorithm's ability to escape local optima. To demonstrate the effectiveness of the proposed algorithm, we perform comparison experiments on the CEC2022 benchmark function test suite, comparing HGJO with the original GJO and five classical meta-heuristics. The experimental results show that HGJO achieves competitive results on the benchmark set. Finally, all of the algorithms are applied to variable threshold segmentation of aerial images, and the results show that HGJO outperforms the others. Notably, the source code of HGJO is publicly available at https://github.com/Vang-z/HGJO.


Subject(s)
Algorithms , Jackals , Animals , Image Processing, Computer-Assisted/methods , Software , Photography
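The opposition-based initialization mentioned in the abstract above has a compact form. The sketch below uses a placeholder sphere objective rather than HGJO's actual multilevel-threshold criterion, and the function name `opposition_based_init` is illustrative, not from the paper:

```python
import numpy as np

def opposition_based_init(pop_size, dim, lower, upper, seed=None):
    """Initialize a population together with its opposite points and keep
    the fitter half, boosting diversity over plain random initialization.

    The fitness used here is the sphere function (minimization), a
    placeholder for the paper's multilevel-threshold objective."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    # Opposite point of x within [lower, upper] is lower + upper - x
    opp = lower + upper - pop
    fitness = lambda x: np.sum(x ** 2, axis=1)  # placeholder objective
    merged = np.vstack([pop, opp])
    scores = fitness(merged)
    return merged[np.argsort(scores)[:pop_size]]

pop = opposition_based_init(10, 3, -5.0, 5.0, seed=0)
print(pop.shape)  # (10, 3)
```

Because each candidate competes with its mirror image, the retained population tends to start closer to the optimum than a purely random one.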
4.
Math Biosci Eng ; 20(6): 10954-10976, 2023 Apr 21.
Article in English | MEDLINE | ID: covidwho-2319238

ABSTRACT

To address the problems of blurred edges, uneven background distribution, and heavy noise interference in medical image segmentation, we propose a medical image segmentation algorithm based on deep neural network technology. It adopts a U-Net-like backbone structure comprising two parts, encoding and decoding. First, images pass through the encoder path, built from residual and convolutional structures, for image feature extraction. We add an attention mechanism module to the network's skip connections to address redundant channel dimensions and low spatial perception of complex lesions. Finally, the medical image segmentation results are obtained through the decoder path, also built from residual and convolutional structures. To verify the validity of the model, we conducted a comparative experimental analysis; the results show that the Dice and IoU scores of the proposed model are 0.7826 and 0.9683 on DRIVE, 0.8904 and 0.8069 on ISIC2018, and 0.9462 and 0.9537 on the COVID-19 CT dataset, respectively. Segmentation accuracy is effectively improved for medical images with complex shapes and adhesions between lesions and normal tissues.


Subject(s)
COVID-19 , Deep Learning , Humans , COVID-19/diagnostic imaging , Algorithms , Technology , Tomography, X-Ray Computed , Image Processing, Computer-Assisted
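The Dice and IoU figures these abstracts report are overlap statistics between a predicted and a reference binary mask. A minimal sketch (the function name `dice_iou` is ours, not any paper's):

```python
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    """Dice coefficient and IoU for binary masks given as 0/1 arrays."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
d, i = dice_iou(pred, target)
# 2 overlapping pixels, 3 predicted, 3 true, union 4:
# dice = 2*2/6 ~= 0.667, iou = 2/4 = 0.5
```

Dice weights the overlap twice, so it is always at least as large as IoU for the same masks.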
5.
Comput Biol Med ; 161: 106932, 2023 07.
Article in English | MEDLINE | ID: covidwho-2311800

ABSTRACT

Attention mechanism-based medical image segmentation methods have developed rapidly in recent years. For attention mechanisms, it is crucial to accurately capture the distribution weights of the effective features contained in the data. To accomplish this, most attention mechanisms use a global squeezing approach. However, this leads to over-focusing on the globally most salient effective features of the region of interest while suppressing the secondary salient ones, so that some fine-grained features are discarded outright. To address this issue, we propose a multiple-local-perception method to aggregate global effective features, and design a fine-grained medical image segmentation network, named FSA-Net. This network consists of two key components: 1) novel Separable Attention Mechanisms, which replace global squeezing with local squeezing to recover the suppressed secondary salient effective features, and 2) a Multi-Attention Aggregator (MAA), which fuses multi-level attention to efficiently aggregate task-relevant semantic information. We conduct extensive experimental evaluations on six publicly available medical image segmentation datasets: MoNuSeg, COVID-19-CT100, GlaS, CVC-ClinicDB, ISIC2018, and DRIVE. Experimental results show that FSA-Net outperforms state-of-the-art methods in medical image segmentation.


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , Semantics , Image Processing, Computer-Assisted
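The contrast between global and local squeezing that motivates FSA-Net can be illustrated with plain average pooling. `local_squeeze` below is a simplified stand-in for the paper's separable attention, not its actual implementation:

```python
import numpy as np

def global_squeeze(x):
    """Global average pooling: one statistic per channel, (C, H, W) -> (C,)."""
    return x.mean(axis=(1, 2))

def local_squeeze(x, win):
    """Average-pool over non-overlapping win x win regions,
    (C, H, W) -> (C, H//win, W//win). Each region keeps its own
    descriptor, so a locally salient feature is not averaged away
    by a single global statistic."""
    c, h, w = x.shape
    x = x[:, : h - h % win, : w - w % win]
    x = x.reshape(c, h // win, win, w // win, win)
    return x.mean(axis=(2, 4))

x = np.zeros((1, 4, 4))
x[0, 0, 0] = 8.0                # one salient activation
print(global_squeeze(x))        # [0.5] -- diluted over the whole map
print(local_squeeze(x, 2)[0])   # top-left region keeps 2.0, others 0
```

With one global number per channel, the single peak is diluted to 0.5; with local windows, the window containing it still stands out.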
6.
Comput Biol Med ; 157: 106726, 2023 05.
Article in English | MEDLINE | ID: covidwho-2309093

ABSTRACT

Deep learning-based methods have become the dominant methodology in medical image processing with the advancement of deep learning in natural image classification, detection, and segmentation. Deep learning-based approaches have proven to be quite effective in single-lesion recognition and segmentation. Multiple-lesion recognition is more difficult than single-lesion recognition due to the small variation between lesions or the very wide range of lesions involved. Several studies have recently explored deep learning-based algorithms for the multiple-lesion recognition challenge. This paper provides an in-depth overview and analysis of deep learning-based methods for multiple-lesion recognition developed in recent years, covering multiple-lesion recognition in diverse body areas as well as recognition of whole-body multiple diseases. We discuss the challenges that persist in multiple-lesion recognition tasks by critically assessing these efforts. Finally, we outline open problems and potential future research areas, in the hope that this review will help researchers develop approaches that drive further advances.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Algorithms
7.
Comput Biol Med ; 156: 106718, 2023 04.
Article in English | MEDLINE | ID: covidwho-2308968

ABSTRACT

Cardiovascular diseases (CVD), the leading cause of death worldwide, pose a serious threat to human health. Segmentation of the carotid lumen-intima interface (LII) and media-adventitia interface (MAI) is a prerequisite for measuring intima-media thickness (IMT), which is of great significance for early screening and prevention of CVD. Despite recent advances, existing methods still fail to incorporate task-related clinical domain knowledge and require complex post-processing steps to obtain fine contours of the LII and MAI. In this paper, a nested attention-guided deep learning model (NAG-Net) is proposed for accurate segmentation of the LII and MAI. NAG-Net consists of two nested sub-networks, the Intima-Media Region Segmentation Network (IMRSN) and the LII and MAI Segmentation Network (LII-MAISN). It innovatively incorporates task-related clinical domain knowledge through the visual attention map generated by IMRSN, enabling LII-MAISN to focus on the region a clinician would attend to during segmentation. Moreover, fine contours of the LII and MAI can be obtained directly from the segmentation results through simple refinement, without complicated post-processing. To further improve the feature extraction ability of the model and reduce the impact of data scarcity, transfer learning is adopted by applying pretrained VGG-16 weights. In addition, a channel attention-based encoder feature fusion block (EFFB-ATT) is designed to efficiently represent the useful features extracted by the two parallel encoders in LII-MAISN. Extensive experimental results demonstrate that NAG-Net outperformed other state-of-the-art methods, achieving the highest performance on all evaluation metrics.


Subject(s)
Cardiovascular Diseases , Carotid Intima-Media Thickness , Humans , Adventitia/diagnostic imaging , Carotid Arteries/diagnostic imaging , Tunica Intima/diagnostic imaging , Image Processing, Computer-Assisted/methods
8.
Med Image Anal ; 86: 102787, 2023 05.
Article in English | MEDLINE | ID: covidwho-2308518

ABSTRACT

X-ray computed tomography (CT) and positron emission tomography (PET) are two of the most commonly used medical imaging technologies for the evaluation of many diseases. Full-dose imaging for CT and PET ensures image quality but raises concerns about the potential health risks of radiation exposure. The tension between reducing radiation exposure and maintaining diagnostic performance can be addressed effectively by reconstructing low-dose CT (L-CT) and low-dose PET (L-PET) images to the same high quality as their full-dose counterparts (F-CT and F-PET). In this paper, we propose an Attention-encoding Integrated Generative Adversarial Network (AIGAN) to achieve efficient and universal full-dose reconstruction for L-CT and L-PET images. AIGAN consists of three modules: a cascade generator, a dual-scale discriminator, and a multi-scale spatial fusion module (MSFM). A sequence of consecutive L-CT (L-PET) slices is first fed into the cascade generator, which integrates a generation-encoding-generation pipeline. The generator plays a zero-sum game with the dual-scale discriminator over two stages, coarse and fine. In both stages, the generator generates estimated F-CT (F-PET) images as similar to the original F-CT (F-PET) images as possible. After the fine stage, the estimated fine full-dose images are fed into the MSFM, which fully exploits inter- and intra-slice structural information, to output the final generated full-dose images. Experimental results show that the proposed AIGAN achieves state-of-the-art performance on commonly used metrics and satisfies the reconstruction needs of clinical standards.


Subject(s)
Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Tomography, X-Ray Computed/methods , Attention
9.
Sci Rep ; 13(1): 6762, 2023 04 25.
Article in English | MEDLINE | ID: covidwho-2297227

ABSTRACT

In recent years, several solution families have emerged for medical image segmentation, such as U-shaped structures, transformer-based networks, and multi-scale feature learning methods. However, these networks often neglect parameter count and real-time performance, and they segment boundary regions poorly. The main reason is that such networks have deep encoders and a large number of channels, and they attend excessively to local information rather than the global information that is crucial to segmentation accuracy. We therefore propose MBSNet, a novel multi-branch medical image segmentation network. We first design two branches, using a parallel residual mixer (PRM) module and a dilated convolution block, to capture the local and global information of the image. At the same time, an SE-Block and a new spatial attention module enhance the output features. Considering the different output features of the two branches, we adopt a cross-fusion method to effectively combine and complement features between different layers. MBSNet was tested on five datasets: ISIC2018, Kvasir, BUSI, COVID-19, and LGG. The combined results show that MBSNet is lighter, faster, and more accurate. Specifically, for a [Formula: see text] input, MBSNet's FLOPs are 10.68G, with an F1-Score of [Formula: see text] on the Kvasir test dataset, well above [Formula: see text] for UNet++ with FLOPs of 216.55G. We also use the multi-criteria decision-making method TOPSIS, based on F1-Score, IoU, and Geometric-Mean (G-mean), for overall analysis. The proposed MBSNet model performs better than other competitive methods. Code is available at https://github.com/YuLionel/MBSNet .


Subject(s)
COVID-19 , Household Articles , Humans , Learning , Electric Power Supplies , Image Processing, Computer-Assisted
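TOPSIS, the multi-criteria ranking method this abstract uses for overall analysis, is straightforward to sketch. The model scores below are made up for illustration, and all three criteria (F1, IoU, G-mean) are treated as benefit-type:

```python
import numpy as np

def topsis(scores, weights=None):
    """Rank alternatives (rows) on benefit criteria (columns) by relative
    closeness to the ideal solution."""
    x = np.asarray(scores, dtype=float)
    w = np.ones(x.shape[1]) / x.shape[1] if weights is None else np.asarray(weights)
    v = w * x / np.linalg.norm(x, axis=0)       # weighted, vector-normalized
    best, worst = v.max(axis=0), v.min(axis=0)  # ideal and anti-ideal points
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)         # closeness in [0, 1]

# Three hypothetical models scored on F1, IoU, G-mean
scores = [[0.90, 0.82, 0.88],
          [0.85, 0.80, 0.86],
          [0.92, 0.85, 0.90]]
c = topsis(scores)
print(np.argmax(c))  # 2 -- the model that dominates on every criterion
```

A model that is best on every criterion coincides with the ideal point and gets closeness 1; one that is worst on every criterion gets 0.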
10.
Sensors (Basel) ; 23(7)2023 Apr 05.
Article in English | MEDLINE | ID: covidwho-2296272

ABSTRACT

As two of the most popular technologies of the 21st century, artificial intelligence (AI) and the internet of things (IoT) are effective paradigms that have played a vital role in transforming the agricultural industry during the pandemic. The convergence of AI and IoT has sparked a recent wave of interest in the artificial intelligence of things (AIoT). An IoT system provides data flow to AI techniques for data integration and interpretation as well as for automatic image analysis and data prediction. The adoption of AIoT technology significantly transforms the traditional agriculture scenario by addressing numerous challenges, including pest management and post-harvest management issues. Although AIoT is an essential driving force for smart agriculture, some barriers must still be overcome. In this paper, a systematic literature review of AIoT is presented to highlight its current progress, applications, and advantages. The AIoT concept, from smart devices in IoT systems to the adoption of AI techniques, is discussed. The increasing trend in article publication regarding AIoT topics is presented based on a database search process. Lastly, the challenges to the adoption of AIoT technology in modern agriculture are also discussed.


Subject(s)
Agriculture , Artificial Intelligence , Technology , Databases, Factual , Image Processing, Computer-Assisted
11.
Comput Biol Med ; 158: 106892, 2023 05.
Article in English | MEDLINE | ID: covidwho-2293243

ABSTRACT

Vessel segmentation is significant for characterizing vascular diseases and has received wide attention from researchers. Common vessel segmentation methods are mainly based on convolutional neural networks (CNNs), which have excellent feature learning capabilities. Because CNNs cannot anticipate the direction of learning, they rely on wide channels or considerable depth to obtain sufficient features, which can introduce redundant parameters. Drawing on the strength of Gabor filters in vessel enhancement, we built a Gabor convolution kernel and designed its optimization: unlike traditional filter use and common modulation, its parameters are automatically updated via gradients in back propagation. Since the structural shape of Gabor convolution kernels matches that of regular convolution kernels, they can be integrated into any CNN architecture. We built a Gabor ConvNet from Gabor convolution kernels and tested it on three vessel datasets. It scored 85.06%, 70.52%, and 67.11%, respectively, ranking first on all three datasets. The results show that our method outperforms advanced vessel segmentation models. Ablations also show that the Gabor kernel has better vessel extraction ability than the regular convolution kernel.


Subject(s)
Algorithms , Neural Networks, Computer , Image Processing, Computer-Assisted/methods
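A Gabor convolution kernel is the real part of a Gaussian-windowed sinusoid. The sketch below builds one with fixed parameters, whereas the paper updates parameters such as sigma, theta, and lambda by back propagation; the parameter values here are illustrative:

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor filter: a Gaussian envelope multiplied by
    a cosine carrier oriented at angle theta with wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by the orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

k = gabor_kernel(size=7, sigma=2.0, theta=np.pi / 4, lam=4.0)
print(k.shape)  # (7, 7)
```

Because the result is an ordinary `size x size` array, it can be dropped in wherever a regular convolution kernel of the same shape is expected, which is the property the abstract relies on.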
12.
Comput Biol Med ; 154: 106555, 2023 03.
Article in English | MEDLINE | ID: covidwho-2288631

ABSTRACT

Hypopharyngeal cancer (HPC) is a rare disease, so automatically segmenting HPC tumors and metastatic lymph nodes (HPC risk areas) from medical images with a small-scale dataset is challenging. Combining low-level details and high-level semantics from feature maps at different scales can improve segmentation accuracy. Herein, we propose a Multi-Modality Transfer Learning Network with Hybrid Bilateral Encoder (Twist-Net) for hypopharyngeal cancer segmentation. Specifically, we propose a Bilateral Transition (BT) block and a Bilateral Gather (BG) block to twist (fuse) high-level semantic feature maps and low-level detailed feature maps, and we design a block with multi-receptive-field extraction capabilities, the M Block, to capture multi-scale information. To avoid overfitting caused by the small scale of the dataset, we propose a transfer learning method that transfers prior experience from large computer vision datasets to multi-modality medical imaging datasets. Our method outperforms other methods on the HPC dataset, achieving the highest Dice of 82.98%. It is also superior to other methods on two public medical segmentation datasets, the CHASE_DB1 and BraTS2018 datasets, where its Dice is 79.83% and 84.87%, respectively. The code is available at: https://github.com/zhongqiu1245/TwistNet.


Subject(s)
Hypopharyngeal Neoplasms , Humans , Hypopharyngeal Neoplasms/diagnostic imaging , Learning , Rare Diseases , Semantics , Machine Learning , Image Processing, Computer-Assisted
13.
Comput Biol Med ; 157: 106683, 2023 05.
Article in English | MEDLINE | ID: covidwho-2264789

ABSTRACT

Thoracic disease, like many other diseases, can lead to complications. Existing multi-label medical image learning problems typically include rich pathological information, such as images, attributes, and labels, which is crucial for supplementary clinical diagnosis. However, the majority of contemporary efforts focus exclusively on regression from input to binary labels, ignoring the relationship between visual features and the semantic vectors of labels. In addition, the amount of data is imbalanced across diseases, which frequently causes intelligent diagnostic systems to make erroneous predictions. We therefore aim to improve the accuracy of multi-label classification of chest X-ray images. The ChestX-ray14 images were used as the multi-label dataset for the experiments in this study. By fine-tuning the ConvNeXt network, we obtained visual vectors, which we combined with semantic vectors encoded by BioBERT to map the two forms of features into a common metric space, making the semantic vectors the prototype of each class in that space. The metric relationship between images and labels is then considered at the image level and the disease category level, respectively, and a new dual-weighted metric loss function is proposed. Finally, the average AUC score achieved in the experiment reached 0.826, and our model outperformed the comparison models.


Subject(s)
Deep Learning , X-Rays , Image Processing, Computer-Assisted/methods , Thorax , Semantics
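Scoring an image against per-class semantic prototypes in a shared metric space can be sketched with cosine similarity. The 14 classes, 768-dimensional embeddings, and the `prototype_logits` name below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def prototype_logits(visual, prototypes):
    """Cosine similarity of one image embedding (d,) against per-class
    label prototypes (c, d) in the shared metric space; returns (c,)."""
    v = visual / np.linalg.norm(visual)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return p @ v  # one similarity score per class

rng = np.random.default_rng(1)
protos = rng.standard_normal((14, 768))            # 14 disease prototypes
img = protos[3] + 0.01 * rng.standard_normal(768)  # embedding near class 3
print(int(np.argmax(prototype_logits(img, protos))))  # 3
```

An image embedding lands closest to the prototype of its class, so argmax over the similarities recovers the label; a metric loss trains the encoder to make that happen.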
14.
Sci Rep ; 13(1): 5359, 2023 04 01.
Article in English | MEDLINE | ID: covidwho-2278125

ABSTRACT

Coronavirus disease 2019 (COVID-19) is a new acute respiratory disease that has spread rapidly throughout the world. This paper proposes a novel deep learning network that merges ResNet-50 with a Transformer, named RMT-Net. On a ResNet-50 backbone, it uses the Transformer to capture long-distance feature information and adopts convolutional neural networks and depth-wise convolution to obtain local features, reduce the computational cost, and accelerate detection. RMT-Net includes four stage blocks to extract features at different receptive fields. In the first three stages, global self-attention is adopted to capture important feature information and construct relationships between tokens. In the fourth stage, residual blocks are used to extract detailed features. Finally, a global average pooling layer and a fully connected layer perform the classification task. Training, validation, and testing are carried out on self-built datasets. The RMT-Net model is compared with ResNet-50, VGGNet-16, i-CapsNet, and MGMADS-3. The experimental results show that RMT-Net achieves a test accuracy of 97.65% on the X-ray image dataset and 99.12% on the CT image dataset, both higher than the other four models. The RMT-Net model is only 38.5 M in size, and its detection speed is 5.46 ms per X-ray image and 4.12 ms per CT image. This shows that the model can detect and classify COVID-19 with higher accuracy and efficiency.


Subject(s)
COVID-19 , Delayed Emergence from Anesthesia , Humans , COVID-19/diagnostic imaging , Algorithms , Neural Networks, Computer , Acceleration , Image Processing, Computer-Assisted
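The cost saving this abstract attributes to depth-wise convolution comes straight from parameter counting: a depth-wise separable layer replaces one dense k x k convolution with a per-channel k x k filter plus a 1x1 point-wise mix. A quick sketch (channel and kernel sizes are illustrative, not RMT-Net's):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depth-wise k x k filter per input channel, then 1x1 point-wise mixing."""
    return c_in * k * k + c_in * c_out

std = conv_params(64, 128, 3)                 # 64*128*9  = 73728
dws = depthwise_separable_params(64, 128, 3)  # 64*9 + 64*128 = 8768
print(std, dws, round(std / dws, 1))          # roughly 8x fewer weights
```

The ratio approaches k*k (here 9x) as the output channel count grows, which is why depth-wise convolution is a standard tool for shrinking model size.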
15.
Sensors (Basel) ; 23(5)2023 Feb 24.
Article in English | MEDLINE | ID: covidwho-2269783

ABSTRACT

Medical images are an important basis for diagnosing diseases, and CT images in particular are an important tool for diagnosing lung lesions. However, manual segmentation of infected areas in CT images is time-consuming and laborious. With their excellent feature extraction capabilities, deep learning-based methods have been widely used for automatic lesion segmentation of COVID-19 CT images, but their segmentation accuracy remains limited. To effectively quantify the severity of lung infections, we propose SMA-Net, which combines the Sobel operator with multi-attention networks for COVID-19 lesion segmentation. In SMA-Net, an edge feature fusion module uses the Sobel operator to add edge detail information to the input image. To guide the network to focus on key regions, SMA-Net introduces a self-attentive channel attention mechanism and a spatial linear attention mechanism. In addition, the Tversky loss function is adopted in the segmentation network to handle small lesions. Comparative experiments on public COVID-19 datasets show that the average Dice similarity coefficient (DSC) and intersection over union (IoU) of the proposed SMA-Net model are 86.1% and 77.8%, respectively, better than those of most existing segmentation networks.


Subject(s)
COVID-19 , Labor, Obstetric , Pregnancy , Female , Humans , Image Processing, Computer-Assisted
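The Tversky loss adopted here for small lesions generalizes the Dice loss by weighting false negatives and false positives separately, which lets training favor recall. The alpha/beta values and weighting convention below are illustrative, not necessarily those used in SMA-Net:

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-7):
    """Tversky loss for a soft/binary mask. In this sketch alpha weights
    false negatives and beta false positives, so alpha > beta penalizes
    missed lesion pixels more heavily (useful for small lesions)."""
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    tp = (pred * target).sum()
    fn = ((1 - pred) * target).sum()
    fp = (pred * (1 - target)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return 1.0 - tversky

loss = tversky_loss(np.array([1, 1, 0, 0]), np.array([1, 0, 1, 0]))
# tp=1, fn=1, fp=1 -> index = 1/(1 + 0.7 + 0.3) = 0.5, loss = 0.5
```

With alpha = beta = 0.5 the index reduces to the Dice coefficient, so Dice loss is the symmetric special case.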
16.
Med Image Anal ; 86: 102797, 2023 05.
Article in English | MEDLINE | ID: covidwho-2252781

ABSTRACT

Since the emergence of the Covid-19 pandemic in late 2019, medical imaging has been widely used to analyze this disease. Indeed, CT-scans of the lungs can help diagnose, detect, and quantify Covid-19 infection. In this paper, we address the segmentation of Covid-19 infection from CT-scans. To improve the performance of the Att-Unet architecture and maximize the use of the Attention Gate, we propose the PAtt-Unet and DAtt-Unet architectures. PAtt-Unet exploits input pyramids to preserve spatial awareness in all of the encoder layers, while DAtt-Unet is designed to guide the segmentation of Covid-19 infection inside the lung lobes. We also propose to combine these two architectures into a single one, which we refer to as PDAtt-Unet. To overcome blurry boundary-pixel segmentation of Covid-19 infection, we propose a hybrid loss function. The proposed architectures were tested on four datasets with two evaluation scenarios (intra- and cross-dataset). Experimental results showed that both PAtt-Unet and DAtt-Unet improve on Att-Unet in segmenting Covid-19 infections, and the combined architecture PDAtt-Unet leads to further improvement. For comparison with other methods, three baseline segmentation architectures (Unet, Unet++, and Att-Unet) and three state-of-the-art architectures (InfNet, SCOATNet, and nCoVSegNet) were tested. The comparison showed the superiority of the proposed PDAtt-Unet trained with the proposed hybrid loss (PDEAtt-Unet) over all other methods. Moreover, PDEAtt-Unet is able to overcome various challenges in segmenting Covid-19 infections across the four datasets and both evaluation scenarios.


Subject(s)
COVID-19 , Pandemics , Humans , Tomography, X-Ray Computed , Image Processing, Computer-Assisted
17.
IEEE Trans Med Imaging ; 41(12): 3812-3823, 2022 Dec.
Article in English | MEDLINE | ID: covidwho-2288807

ABSTRACT

The accurate segmentation of multiple types of lesions from adjacent tissues in medical images is significant in clinical practice. Convolutional neural networks (CNNs) based on the coarse-to-fine strategy have been widely used in this field. However, multi-lesion segmentation remains challenging due to uncertainty in size and contrast and the high interclass similarity of tissues. In addition, the commonly adopted cascaded strategy is rather demanding in terms of hardware, which limits its potential for clinical deployment. To address these problems, we propose a novel Prior Attention Network (PANet) that follows the coarse-to-fine strategy to perform multi-lesion segmentation in medical images. The proposed network achieves the two segmentation steps in a single network by inserting a lesion-related spatial attention mechanism. We also propose an intermediate supervision strategy for generating lesion-related attention to acquire the regions of interest (ROIs), which accelerates convergence and markedly improves segmentation performance. We have investigated the proposed segmentation framework in two applications: 2D segmentation of multiple lung infections in lung CT slices and 3D segmentation of multiple lesions in brain MRIs. Experimental results show that in both 2D and 3D segmentation tasks our proposed network achieves better performance with less computational cost than cascaded networks. The proposed network can be regarded as a universal solution to multi-lesion segmentation in both 2D and 3D tasks. The source code is available at https://github.com/hsiangyuzhao/PANet.


Subject(s)
Magnetic Resonance Imaging , Neural Networks, Computer , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Tomography, X-Ray Computed , Image Processing, Computer-Assisted/methods
18.
Methods ; 205: 200-209, 2022 09.
Article in English | MEDLINE | ID: covidwho-2255505

ABSTRACT

BACKGROUND: Lesion segmentation is a critical step in medical image analysis, and methods to identify pathology without time-intensive manual labeling of data are of utmost importance during a pandemic and in resource-constrained healthcare settings. Here, we describe a method for fully automated segmentation and quantification of pathological COVID-19 lung tissue on chest Computed Tomography (CT) scans without the need for manually segmented training data. METHODS: We trained a cycle-consistent generative adversarial network (CycleGAN) to convert images of COVID-19 scans into their generated healthy equivalents. Subtraction of the generated healthy images from their corresponding original CT scans yielded maps of pathological tissue, without background lung parenchyma, fissures, airways, or vessels. We then used these maps to construct three-dimensional lesion segmentations. Using a validation dataset, Dice scores were computed for our lesion segmentations and other published segmentation networks using ground truth segmentations reviewed by radiologists. RESULTS: The COVID-to-Healthy generator eliminated high Hounsfield unit (HU) voxels within pulmonary lesions and replaced them with lower HU voxels. The generator did not distort normal anatomy such as vessels, airways, or fissures. The generated healthy images had higher gas content (2.45 ± 0.93 vs 3.01 ± 0.84 L, P < 0.001) and lower tissue density (1.27 ± 0.40 vs 0.73 ± 0.29 Kg, P < 0.001) than their corresponding original COVID-19 images, and they were not significantly different from those of the healthy images (P < 0.001). Using the validation dataset, lesion segmentations scored an average Dice score of 55.9, comparable to other weakly supervised networks that do require manual segmentations. CONCLUSION: Our CycleGAN model successfully segmented pulmonary lesions in mild and severe COVID-19 cases. 
Our model's performance was comparable to other published models; however, our model is unique in its ability to segment lesions without the need for manual segmentations.


Subject(s)
COVID-19 , Image Processing, Computer-Assisted , COVID-19/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Lung/diagnostic imaging , Tomography, X-Ray Computed/methods
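The subtraction step at the core of this pipeline can be sketched directly: subtract the generated "healthy" scan from the original and keep voxels whose density dropped markedly. The HU threshold and the array values below are illustrative, not from the paper:

```python
import numpy as np

def lesion_map(original_hu, generated_healthy_hu, threshold=100.0):
    """Binary lesion mask from a COVID-to-Healthy GAN pair: voxels where
    the generator lowered the Hounsfield value by more than `threshold`
    are flagged as pathological tissue."""
    diff = original_hu - generated_healthy_hu  # positive where lesion removed
    return (diff > threshold).astype(np.uint8)

original = np.array([[-700.0, -650.0], [-100.0, -80.0]])  # dense 2nd row
healthy = np.array([[-710.0, -660.0], [-750.0, -740.0]])  # generator aerated it
mask = lesion_map(original, healthy)
# only the dense second row (consolidation replaced by air-like HU) is flagged
```

Because normal parenchyma, vessels, and fissures are reproduced almost unchanged by the generator, their difference stays below threshold and drops out of the mask, which is what removes the need for manual training labels.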
19.
Comput Biol Med ; 155: 106698, 2023 03.
Article in English | MEDLINE | ID: covidwho-2264677

ABSTRACT

The COVID-19 pandemic has severely threatened human health, and automated algorithms are needed to segment infected lung regions in computed tomography (CT). Although several deep convolutional neural networks (DCNNs) have been proposed for this purpose, their performance is limited by their restricted local receptive field and deficient global reasoning ability. To address these issues, we propose a segmentation network with a novel pixel-wise sparse graph reasoning (PSGR) module for the segmentation of COVID-19 infected regions in CT images. The PSGR module, inserted between the encoder and decoder of the network, improves the modeling of global contextual information. In the PSGR module, a graph is first constructed by projecting each pixel onto a node based on the features produced by the encoder. The graph is then converted into a sparsely-connected one by keeping only the K strongest connections to each uncertainly segmented pixel. Finally, global reasoning is performed on the sparsely-connected graph. Our segmentation network was evaluated on three publicly available datasets and compared with a variety of widely-used segmentation models. The results demonstrate that (1) the proposed PSGR module captures long-range dependencies effectively and (2) the segmentation model equipped with this module accurately segments COVID-19 infected regions in CT images and outperforms all competing models.


Subject(s)
COVID-19 , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Pandemics , Neural Networks, Computer , Tomography, X-Ray Computed/methods
20.
Comput Methods Programs Biomed ; 233: 107493, 2023 May.
Article in English | MEDLINE | ID: covidwho-2269449

ABSTRACT

BACKGROUND AND OBJECTIVE: Transformers, which profit from the global information modeling afforded by the self-attention mechanism, have recently achieved remarkable performance in computer vision. In this study, a novel transformer-based network for medical image segmentation, the multi-scale embedding spatial transformer (MESTrans), is proposed. METHODS: First, a dataset called COVID-DS36 was created from 4369 computed tomography (CT) images of 36 patients from a partner hospital, of whom 18 had COVID-19 and 18 did not. Subsequently, a novel medical image segmentation network was proposed that introduces a self-attention mechanism to overcome the inherent limitations of convolutional neural networks (CNNs) and is capable of adaptively extracting discriminative information from both global and local content. Specifically, building on U-Net, a multi-scale embedding block (MEB) and a multi-layer spatial attention transformer (SATrans) structure were designed, which dynamically adjust the receptive field in accordance with the input content. The spatial relationship between multi-level, multi-scale image patches is modeled, and global context information is captured effectively. To make the network concentrate on salient feature regions, a feature fusion module (FFM) was established that performs global learning and soft selection between shallow and deep features, adaptively combining the encoder and decoder features. Four datasets comprising CT images, magnetic resonance (MR) images, and H&E-stained slide images were used to assess the performance of the proposed network. RESULTS: Experiments were performed using four different types of medical image datasets. For the COVID-DS36 dataset, our method achieved a Dice similarity coefficient (DSC) of 81.23%. For the GlaS dataset, 89.95% DSC and 82.39% intersection over union (IoU) were obtained.
On the Synapse dataset, the average DSC was 77.48% and the average Hausdorff distance (HD) was 31.69 mm. For the I2CVB dataset, 92.3% DSC and 85.8% IoU were obtained. CONCLUSIONS: The experimental results demonstrate that the proposed model has an excellent generalization ability and outperforms other state-of-the-art methods. It is expected to be a potent tool to assist clinicians in auxiliary diagnosis and to promote the development of medical intelligence technology.
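Both the DSC and IoU figures quoted in these abstracts follow the standard overlap definitions for binary masks. A minimal reference implementation (standard formulas, not code from any of the papers):

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice similarity coefficient and intersection-over-union for
    binary segmentation masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    # DSC = 2|A∩B| / (|A| + |B|); eps guards against empty masks.
    dice = 2.0 * inter / (pred.sum() + target.sum() + eps)
    # IoU = |A∩B| / |A∪B|
    union = np.logical_or(pred, target).sum()
    iou = inter / (union + eps)
    return dice, iou
```

The two metrics are monotonically related (DSC = 2·IoU / (1 + IoU)), which is why papers reporting only one of them can still be compared on overlap quality.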


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , Electric Power Supplies , Hospitals , Learning , Neural Networks, Computer , Image Processing, Computer-Assisted